
enriquedelpino

No description provided.

@Paraphraser

The first thing I want to say is I'm not in charge of anything. I don't make the rules and I don't decide whether PRs get accepted or rejected. I'm just another IOTstack user who is trying to help. I mostly respond to PRs and issues because I reckon there's nothing worse than putting in the effort to do something and getting nothing but silence in return.

Can I ask whether you've joined the IOTstack Discord channel? You might get help and guidance there too.

Speaking 100% personally, I don't "get" the whole idea of nginx. As far as I can tell, it's just a way of being able to:

  1. construct URLs pointing to raspberrypi.local/nodered or nodered.raspberrypi.local and have nginx figure out that that's really raspberrypi.local:1880; and
  2. secure everything behind Let's Encrypt certificates.

I've never seen the sense in URL rewriting. I reckon that's called "make a bookmark to raspberrypi.local:1880 and move on".

I've also never seen the sense in bothering with SSL certificates. Seems to me that implies exposing the Pi to the Internet and I'd far rather do that with a VPN like WireGuard or ZeroTier.

I'm explaining this so you understand that I'm approaching nginx from the viewpoint of someone who does not understand it, does not see the need for it, and who suspects that the reason PR174 (gcgarner) and PR21 (SensorsIot) stalled is that my uninformed viewpoint is also the majority viewpoint. I keep hoping someone will explain why nginx is a good thing and why everyone should use it.

I did actually try to get nginx running at one point because I was intrigued. I gave up, mainly because of the need to futz with every single container I wanted to associate with nginx. I thought that was a Very Bad Idea. At the time, I thought the most likely explanation for my lack of progress was that I had headed in the wrong direction. I assumed that someone who knew more about nginx (which could be you) would be able to get a lot further. In other words, I like to try to keep an open mind.

So, please hear everything I'm saying as me still keeping an open mind. I'm going to try to help. My expertise, such as it is, is in IOTstack generally, not nginx, so IOTstack is my focus.

Also, what follows is going to sound like a long list of complaints. Please don't hear it that way. I've been working through what you've done and I've written down the things I've noticed along the way.

Yes, it's a long post and a huge brain-dump. Sorry. Can't be helped.

To be perfectly honest, I think the best course of action is to withdraw this PR and start over.

The main reason I say that is because I suspect (I don't know) that you have prepared your PR using a working clone of IOTstack. By "working", I mean a clone that is actually in use running your containers.

I'm also guessing that you are not running IOTstack on a "real" Raspberry Pi.

The reason I have those suspicions is that your PR includes commenting-out Raspberry Pi devices (ttyAMA0, vcio, gpiomem) in both the Home Assistant and Node-RED containers. That kind of change is going to break an awful lot of "real" Raspberry Pi implementations. Basically, you can't do that!

In the influxdb and tasmoadmin service definitions, it looks to me like conflict-resolution markers have become part of your commit. Those will create a bit of a mess too.

Also in the influxdb service definition, you seem to have removed image: influxdb:1.8 in favour of image: "influxdb:latest". What that's going to do is upgrade everyone to InfluxDB 2.0. Going from InfluxDB 1.8 to 2.0 is not as simple as just pulling a new image. That's why (a) InfluxDB is pinned to 1.8, (b) there's a separate service definition for InfluxDB 2.0, and (c) a whole lot of documentation explaining how to migrate. Basically, if this change made it into production, it would break every single IOTstack implementation that depends on InfluxDB 1.8.
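
To make the contrast concrete (just the relevant lines, hand-typed here rather than copied from your diff):

    # factory configuration - pinned so existing users stay on the 1.8 series
    image: influxdb:1.8

    # as committed in this PR - floats to the latest tag, ie InfluxDB 2.x
    image: "influxdb:latest"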

Plus, the influxdb service definition has an env_file: clause so I suspect there's some confusion (either in your working system or your understanding of your working system) about the difference between old menu, new menu, and the experimental menu.

In my opinion, it is a really, really bad idea to confuse what you want (your specific configuration) with what is appropriate for everyone (the "factory" configuration). Once you have a configuration working on your system, you need to take a step back and ask how that abstracts to something that can be made part of the factory configuration. And the best way to do that is with a clean clone of IOTstack that you add things into by copying from your working system.

You might find it helpful to read Preparing a Pull Request for IOTstack. Git has many different ways to skin every cat so I'm not going to say that that gist is the only or best way to go about creating pull requests, just that it might give you some things to think about.

Now, let's rule a line under that. If I were attacking this problem (nginx) and I reached the conclusion that three containers were needed, I'd make them a single service definition. Something like this:

  1. At the path:

    ~/IOTstack/.templates/nginx/service.yml
    
  2. Content:

    nginx-gen:
      container_name: nginx-gen
      image: ajoubert/docker-gen-arm
      restart: unless-stopped
      volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./volumes/nginx-proxy/nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro
      - ./volumes/nginx-proxy/nginx-conf:/etc/nginx/conf.d
      - ./volumes/nginx-proxy/nginx-vhost:/etc/nginx/vhost.d
      - ./volumes/nginx-proxy/html:/usr/share/nginx/html
      - ./volumes/nginx-proxy/certs:/etc/nginx/certs:ro
      command: -notify-sighup nginx-proxy -watch -wait 5s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
    
    nginx-letsencrypt:
      container_name: nginx-letsencrypt
      image: budry/jrcs-letsencrypt-nginx-proxy-companion-arm
      restart: unless-stopped
      environment:
      - NGINX_DOCKER_GEN_CONTAINER=nginx-gen
      - NGINX_PROXY_CONTAINER=nginx-proxy
      volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./volumes/nginx-proxy/nginx-conf:/etc/nginx/conf.d
      - ./volumes/nginx-proxy/nginx-vhost:/etc/nginx/vhost.d
      - ./volumes/nginx-proxy/html:/usr/share/nginx/html
      - ./volumes/nginx-proxy/certs:/etc/nginx/certs:rw
    
    nginx-proxy:
      container_name: nginx-proxy
      image: nginx
      restart: unless-stopped
      network_mode: host
      x-ports:
      - "80:80"
      - "443:443"
      volumes:
      - ./volumes/nginx-proxy/nginx-conf:/etc/nginx/conf.d
      - ./volumes/nginx-proxy/nginx-vhost:/etc/nginx/vhost.d
      - ./volumes/nginx-proxy/html:/usr/share/nginx/html
      - ./volumes/nginx-proxy/certs:/etc/nginx/certs:ro
      - ./volumes/nginx-proxy/log:/var/log/nginx
      labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy"
      depends_on:
      - nginx-gen
      - nginx-letsencrypt
    

A couple of things I'll draw out:

  1. Although there's no "rule", most service definitions follow the convention of a title and container_name which matches the title, followed by the image. Then, the general order is the restart clause, any environment variables, then ports, then volumes, then other stuff specific to the container.

    IOTstack users aren't all "Information Technology" gurus. Sticking to common patterns saves those people from wondering/worrying whether order is significant (which it isn't).

    Patterns also help those with more experience to quickly spot any omissions.

  2. Your service definition for nginx-proxy has both a network_mode: host and an active set of port mappings. In the above, you'll see I've turned ports into x-ports which is the same as commenting-out the entire ports clause.

    With current versions of docker-compose, a host-mode container can't have a ports clause. docker-compose throws up an error and refuses to bring up the container.

    If you have a working configuration containing both a host-mode statement and a ports clause then your docker-compose is probably obsolete. You need to bring it up-to-date.

  3. When containers "need" each other (like these seem to) the best way to communicate that to docker-compose is with a depends_on clause - such as I've added to the proxy.

I tried to spin up that configuration on a Raspberry Pi running 64-bit Bullseye. There were two complaints from docker-compose:

 ⠇ nginx-letsencrypt The requested image's platform (linux/arm) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested  0.0s
 ⠇ nginx-gen The requested image's platform (linux/arm) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested          0.0s

A couple of observations:

  1. IOTstack is primarily targeted at the Raspberry Pi. I have watched Andreas' recent video so that may well change. But right now the RPi is the focus so I think that finding images which work on the Raspberry Pi and in both 32- and 64-bit is part of the job. Personally, I "aim wide" looking for images with as wide a range of architectural support as possible (ie "more than Pi" is better than "Pi only").
  2. The budry image was last updated 5 years ago, the ajoubert image 6 years ago. Granted, budry has had a lot of pulls (ajoubert only 62). While I'd be the first to agree that age doesn't matter if something was designed right in the first place, it still suggests a lack of support, and that always worries me.

Although nginx-proxy seemed to start OK, the other two containers went into restart loops.

  • nginx-letsencrypt complained:

     Error: can't get my container ID !
    
  • nginx-gen complained:

     2022/12/30 01:17:45 Unable to parse template: read /etc/docker-gen/templates/nginx.tmpl: is a directory
     …
    

I have no idea what the "container ID" error is moaning about.

The "unable to parse" is traceable to:

- ./volumes/nginx-proxy/nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro

When docker-compose first instantiates a container, it runs down the list of volumes. For each path on the left-hand side, docker-compose checks whether it exists (as a file or a directory). If it doesn't exist, docker-compose always creates the left-hand side as a directory with root ownership.
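
You can see the result after the first launch (an illustrative listing - names and dates will differ):

    $ ls -l ~/IOTstack/volumes/nginx-proxy/
    drwxr-xr-x 2 root root 4096 Dec 30 01:15 nginx.tmpl

That nginx.tmpl is a root-owned directory, not the template file the container expects, which is exactly what the "is a directory" error is telling you.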

I see you've also added a directoryfix.sh and I assume that's an attempt to fix this problem. The directoryfix.sh idea is an old-menu thing and was always a bad idea anyway. If you want a deeper understanding, please see Issue 331.

That issue also explains the "solution" to this class of problem, which is to use a Dockerfile.

If nginx.tmpl is common to all implementations then you should use Mosquitto as your example. The starting point would be something like:

- ./volumes/nginx-proxy/templates:/etc/docker-gen/templates

In other words, a directory rather than a file.

The Dockerfile copies the default nginx.tmpl into the image along with an adjusted entry-point script. At every launch the adjusted entry-point script checks whether /etc/docker-gen/templates/nginx.tmpl exists and, if not, copies the default into place. Note that this also implies you can't use the ":ro" flag because that directory must be writeable.
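
In sketch form (everything here is illustrative - the file names, the paths, and the assumption that the image's normal entry point is the docker-gen binary are all unverified):

    # Dockerfile - copies a known-good default template into the image
    FROM ajoubert/docker-gen-arm
    COPY nginx.tmpl /nginx.tmpl.default
    COPY entrypoint.sh /entrypoint.sh
    RUN chmod 755 /entrypoint.sh
    ENTRYPOINT ["/entrypoint.sh"]

    #!/bin/sh
    # entrypoint.sh - self-repair on every launch
    # if the persisted template has gone walkabout, restore the default
    if [ ! -f /etc/docker-gen/templates/nginx.tmpl ]; then
       cp /nginx.tmpl.default /etc/docker-gen/templates/nginx.tmpl
    fi
    # hand over to what the image would normally run (assumed to be docker-gen)
    exec docker-gen "$@"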

I have not checked the ajoubert/docker-gen-arm image to see if any of this is feasible - mainly because the first thing to sort out is whether that's the appropriate image.

If nginx.tmpl is always going to be specific to each user (it doesn't look like that's the case but it's a long file and hard to be sure) then you can use Node-RED as your example. That does copy the defaults from .templates into services, then runs the Dockerfile from services on the assumption that the user may need to tweak things and then rebuild the local image.

I'll emphasise that Issue 331 is 18 months old. It gives you the rationale for using Dockerfiles, but the current Mosquitto and Node-RED implementations have moved on - there's now more control in the compose file when it comes to pinning versions and so on. Those implementations, rather than the issue, represent current best practice.

The last thing I'll say on this topic is that you need to do everything three times: once for "new menu" (master branch), again for "old menu" (old-menu branch), and again for "experimental menu" (experimental branch). With old menu, the service definitions are indented by two spaces (just as they appear in the compose file), whereas new and experimental menus are left-aligned (the menu re-adds the spaces as it generates the compose file). New menu finds service definitions as soon as you put them in .templates, whereas old menu needs two extra lines of code to be added to menu.sh.
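
For example, the first lines of the template would look like this on the old-menu branch (two-space indent):

      nginx-gen:
        container_name: nginx-gen

and like this on the master and experimental branches (left-aligned):

    nginx-gen:
      container_name: nginx-gen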

I've never been able to get experimental menu to run so I just make a "best efforts" attempt at putting service definitions, Dockerfiles and other defaults into the right places to minimise the overheads if/when work resumes on the experimental menu.

There may be other ways of doing this using Git/GitHub but I submit PRs in threes: one for each branch. Seems to work.

While I think of it, I try to avoid including comment lines in service definitions. At one point those created problems with merging (when the menu uses yaml APIs to merge templates into the compose file). I don't know if that's still true but I don't want to find out "the hard way". That's why I removed all the comments from your service definitions before I tested them.

I said I'd done some brief fiddling about with nginx before giving up. At the time I formed the impression that "labels" were what drove the show. So I don't understand how something like this works:

  environment:
    - VIRTUAL_HOST=~^nodered\..*\.xip\.io

That's sending VIRTUAL_HOST into the Node-RED container. I don't see how any of the nginx containers can be aware of it. Does it actually work? How?

Which leads me to ask another question. As I read that, it's matching on URLs of the form "nodered{.subdomain …}.xip.io" - yes? Isn't that a bit specific? I mean, it's specific to you, isn't it? How will other people get on with different domains?

So I suppose my final observation is that adding nginx to IOTstack might be best approached as:

  1. A single service definition with as many cooperating containers as are required;
  2. Using the "local Dockerfile" approach to tackle both first-time initialisation and self-repair if parts of the persistent storage go walkabout;
  3. Documentation explaining how to add labels (or environment variables - if those actually work) to the containers that users wish to access via a reverse proxy. The doco probably also needs to explain the Let's Encrypt process (getting certificates, installing them, rotating them, etc).

@Slyke
Collaborator

Slyke commented Jan 5, 2023

nginx is its own beast. Another issue is that many applications don't like having their URLs rewritten or HTTP passed through proxies. You are often required to add header rules and set specific things for specific applications. Some applications will not work at all ( Lissy93/dashy#1036 ) and need the developer to fix them. I do use nginx on Kubernetes, but I run it with Authelia, which basically allows me to protect my stuff behind two-factor auth. My mother-in-law can view the webcams I have set up so she can see her garden and stuff while out, and we can keep an eye on the animals too. I can see if the furnace is running (or even turn it on and off remotely, thanks to Node-RED). You're right when you say it can be achieved with a VPN as well, but I'm more comfortable with two-factor. Apache is another alternative to nginx.

nginx should have (editable) templates set up so the end user doesn't have to figure out all the correct HTTP headers an application needs to run. Things like Grafana only require one or two. Others, like PiHole, are a whole rabbit hole of issues when set behind nginx and require documentation.
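
For example, here's a hand-rolled sketch (not from this PR - the server name, upstream host and port are all placeholders) of the kind of per-application template I mean, for something like Grafana:

    server {
      listen 80;
      server_name grafana.raspberrypi.local;

      location / {
        # Grafana wants the original Host header preserved
        proxy_set_header Host $host;
        proxy_pass http://raspberrypi.local:3000;
      }
    }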

I can also see Git conflict markers (such as ======) in the PR; a merge conflict wasn't resolved correctly.

@Paraphraser

Please see PR #648 - I needed to make some changes to the template service definition for PiHole and I took the opportunity to re-align some things. Apologies if this causes any conflicts with your changes to the same file.

Paraphraser added a commit to Paraphraser/IOTstack that referenced this pull request May 27, 2025
Adds Nginx template and documentation.

I do not know whether it is possible for a later PR to close an earlier
PR, in the same way that a PR can mark an issue for closure. Neither do
I know whether it is appropriate GitHub etiquette to even try. I will
simply say that, in my view, this PR supersedes any need for SensorsIot#638 and
that, providing @enriquedelpino (the creator) and @robertcsakany (who
made several contributions) do not object, I recommend SensorsIot#638 be closed.

The Nginx container being proposed in this PR is a self-contained
all-in-one solution. It is a single template and does not touch any
other service definitions. That compares/contrasts with SensorsIot#638 which was
spread across three service definitions, touched 15 existing service
definitions and, I infer, would have implied similar changes to the
service definitions of any "proxyable" (if that's a word) containers
added subsequently.

I have been testing the `jc21/nginx-proxy-manager` for the past couple
of months. I won't claim to have given it a full workout because my
testing has been limited to self-signed SSL certificates (ie no Let's
Encrypt) and I have only defined "proxy hosts" (ie no "redirection
hosts", "streams" or "404 hosts", and no "access lists").

The proxy hosts that I have defined include a judicious mix of HTTP and
HTTPS services, running on the same and different hosts, and running in
both host mode and non-host mode. I have also tested in conjunction
with CNAME records defined by both PiHole and BIND9.

The Nginx service as implemented by the `jc21` Docker image works and
is reliable. The only problems I have found are:

1. A situation where obsolete private SSL certificates are not removed
   from the database when they are deleted. This was filed as
   [Issue 4442](NginxProxyManager/nginx-proxy-manager#4442).

2. The procedure for the "forgot password" use case is not exactly well
   documented. For example it's buried in places like
   [Issue 230](NginxProxyManager/nginx-proxy-manager#230).
   It's also a little bit coarse in that it kills **all** user records.
   Granted, in most IOTstack environments there will only be one user
   anyway but it's still poor practice in an SQL sense and I'd rather
   not perpetuate it. The documentation included with this PR adopts
   the approach of resetting the password of the problematic account to
   a known value.

Signed-off-by: Phill Kelley <[email protected]>